Properties of ABA+ for Non-Monotonic Reasoning
We investigate properties of ABA+, a formalism that extends the well-studied
structured argumentation formalism Assumption-Based Argumentation (ABA) with a
preference-handling mechanism. In particular, we establish desirable properties
that ABA+ semantics exhibit. These pave the way for ABA+ to satisfy some
(arguably) desirable principles of preference handling in argumentation and
non-monotonic reasoning, as well as non-monotonic inference properties of ABA+
under various semantics.
Comment: This is a revised version of the paper presented at the workshop
Planning with Incomplete Information
Planning is a natural domain of application for frameworks of reasoning about
actions and change. In this paper we study how one such framework, the Language
E, can form the basis for planning under (possibly) incomplete information. We
define two types of plans: weak and safe plans, and propose a planner, called
the E-Planner, which is often able to extend an initial weak plan into a safe
plan even though the (explicit) information available is incomplete, e.g. for
cases where the initial state is not completely known. The E-Planner is based
upon a reformulation of the Language E in argumentation terms and a natural
proof theory resulting from the reformulation. It uses an extension of this
proof theory by means of abduction for the generation of plans and adopts
argumentation-based techniques for extending weak plans into safe plans. We
provide representative examples illustrating the behaviour of the E-Planner, in
particular for cases where the status of fluents is incompletely known.
Comment: Proceedings of the 8th International Workshop on Non-Monotonic Reasoning, April 9-11, 2000, Breckenridge, Colorado
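The abstract does not spell out what separates weak from safe plans; on the
usual reading for planning under incomplete information (a weak plan achieves
the goal in at least one completion of the partially known initial state, a
safe plan in every completion), the distinction can be sketched as follows.
This is a minimal illustration under that assumed reading, with a toy
conditional-effect action model; it is not the Language E or the E-Planner's
argumentation-based machinery, and the names (completions, execute, classify)
are hypothetical.

```python
from itertools import product

def completions(known, unknown_fluents):
    """Enumerate every total initial state consistent with the known fluent values."""
    for values in product([True, False], repeat=len(unknown_fluents)):
        state = dict(known)
        state.update(zip(unknown_fluents, values))
        yield state

def execute(plan, state, effects):
    """Apply each action's conditional effects (condition, fluent, value) in order."""
    state = dict(state)
    for action in plan:
        for condition, fluent, value in effects[action]:
            if all(state.get(f) == v for f, v in condition):
                state[fluent] = value
    return state

def classify(plan, known, unknown_fluents, effects, goal):
    """'safe' if the goal holds in every completion, 'weak' if in at least one."""
    results = [
        all(final.get(f) == v for f, v in goal)
        for final in (execute(plan, s, effects)
                      for s in completions(known, unknown_fluents))
    ]
    return "safe" if all(results) else ("weak" if any(results) else "failed")

# Toy example (hypothetical fluents/actions): 'toggle' sets 'on' only if 'powered'.
effects = {"toggle": [([("powered", True)], "on", True)]}
print(classify(["toggle"], {}, ["powered"], effects, [("on", True)]))         # -> 'weak'
print(classify(["toggle"], {"powered": True}, [], effects, [("on", True)]))   # -> 'safe'
```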
Spherical clustering of users navigating 360° content
In Virtual Reality (VR) applications, understanding how users explore the
omnidirectional content is important to optimize content creation, to develop
user-centric services, or even to detect disorders in medical applications.
Clustering users based on their common navigation patterns is a first direction
to understand users' behaviour. However, classical clustering techniques fail
to identify these common paths, since they usually focus on minimizing a simple
distance metric. In this paper, we argue that minimizing a distance metric does
not necessarily identify users who experience similar navigation paths in the
VR domain. Therefore, we propose a graph-based method to identify clusters of
users who attend to the same portion of the spherical
content over time. The proposed solution takes into account the spherical
geometry of the content and aims at clustering users based on the actual
overlap of displayed content among users. Our method is tested on real VR user
navigation patterns. Results show that our solution leads to clusters in which
at least 85% of the content displayed by one user is shared among the other
users belonging to the same cluster.
Comment: 5 pages, conference (Published in: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP))
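The abstract leaves the graph construction and the clustering step at a high
level; the sketch below shows one plausible instantiation under simplifying
assumptions: each user's viewport is modelled as a spherical cap around the
gaze direction, pairwise overlap is approximated from the angle between gaze
vectors, and clusters are the connected components of the graph whose edges
link users with mean overlap above a threshold. The function names and
parameters (viewport_overlap, cluster_users, fov_radius, threshold) are
illustrative, not the paper's actual method.

```python
import numpy as np

def viewport_overlap(dir_a, dir_b, fov_radius=np.deg2rad(50)):
    """Rough overlap of two circular viewports on the unit sphere.

    Viewports are modelled as spherical caps of angular radius `fov_radius`
    around the gaze directions; overlap is approximated from the angle between
    the two gaze vectors (1 = identical view, 0 = disjoint views)."""
    angle = np.arccos(np.clip(np.dot(dir_a, dir_b), -1.0, 1.0))
    return float(np.clip(1.0 - angle / (2 * fov_radius), 0.0, 1.0))

def cluster_users(gaze, threshold=0.85):
    """Group users whose displayed content overlaps at least `threshold` on average.

    `gaze` has shape (n_users, n_timesteps, 3) with unit gaze vectors.
    An edge links two users when their mean overlap over time reaches the
    threshold; clusters are the connected components of that graph."""
    n = gaze.shape[0]
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            mean_overlap = np.mean(
                [viewport_overlap(gaze[i, t], gaze[j, t]) for t in range(gaze.shape[1])]
            )
            adj[i][j] = adj[j][i] = mean_overlap >= threshold
    # connected components via a simple graph traversal
    clusters, seen = [], set()
    for start in range(n):
        if start in seen:
            continue
        component, frontier = [], [start]
        while frontier:
            u = frontier.pop()
            if u in seen:
                continue
            seen.add(u)
            component.append(u)
            frontier.extend(v for v in range(n) if adj[u][v] and v not in seen)
        clusters.append(component)
    return clusters
```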
Justifying Answer Sets using Argumentation
An answer set is a plain set of literals which has no further structure that
would explain why certain literals are part of it and why others are not. We
show how argumentation theory can help to explain why a literal is or is not
contained in a given answer set by defining two justification methods, both of
which make use of the correspondence between answer sets of a logic program and
stable extensions of the Assumption-Based Argumentation (ABA) framework
constructed from the same logic program. Attack Trees justify a literal in
argumentation-theoretic terms, i.e. using arguments and attacks between them,
whereas ABA-Based Answer Set Justifications express the same justification
structure in logic programming terms, that is, using literals and their
relationships. Interestingly, an ABA-Based Answer Set Justification corresponds
to an admissible fragment of the answer set in question, and an Attack Tree
corresponds to an admissible fragment of the stable extension corresponding to
this answer set.
Comment: This article has been accepted for publication in Theory and Practice of Logic Programming
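As a rough illustration of the kind of structure such justifications expose
(not the paper's Attack Tree or ABA-Based Answer Set Justification
constructions), the toy sketch below records, for a normal logic program, a
tree of rules and negation-as-failure assumptions supporting a literal in a
given answer set; each 'not b' assumption stands for an ABA assumption that is
unattacked because b lies outside the answer set. The function name justify and
the rule encoding are assumptions made for the example.

```python
def justify(literal, program, answer_set, seen=None):
    """Toy justification: a tree explaining why `literal` is in `answer_set`.

    `program` is a list of rules (head, positive_body, negative_body).
    A literal is supported by a rule whose positive body is (recursively)
    justified and whose negative body only mentions literals outside the
    answer set (so the corresponding 'not b' assumptions are unattacked)."""
    seen = seen or set()
    if literal in seen:            # guard against cyclic support
        return None
    for head, pos, neg in program:
        if head != literal:
            continue
        if any(b in answer_set for b in neg):
            continue               # a negative-body assumption is attacked
        subtrees, ok = [], True
        for b in pos:
            sub = justify(b, program, answer_set, seen | {literal})
            if sub is None:
                ok = False
                break
            subtrees.append(sub)
        if ok:
            return {"literal": literal,
                    "rule": (head, pos, neg),
                    "assumptions": [f"not {b}" for b in neg],
                    "support": subtrees}
    return None

# Example: p :- not q.   q :- r.   (r has no rule); answer set {p}
program = [("p", [], ["q"]), ("q", ["r"], [])]
print(justify("p", program, {"p"}))
```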
Technical Report on the Learning of Case Relevance in Case-Based Reasoning with Abstract Argumentation
Case-based reasoning is known to play an important role in several legal
settings. In this paper we focus on a recent approach to case-based reasoning,
supported by an instantiation of abstract argumentation whereby arguments
represent cases and attacks between arguments result from outcome disagreement
between cases and a notion of relevance. In this context, relevance is
connected to a form of specificity among cases. We explore how relevance can be
learnt automatically in practice with the help of decision trees, and study
the combination of case-based reasoning with abstract argumentation (AA-CBR)
and learning of case relevance for prediction in legal settings. Specifically,
we show that, for two legal datasets, AA-CBR and decision-tree-based learning
of case relevance perform competitively in comparison with decision trees. We
also show that AA-CBR with decision-tree-based learning of case relevance
results in a more compact representation than its decision tree counterpart,
which could be beneficial for obtaining cognitively tractable explanations.
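To make the underlying mechanism concrete, here is a minimal sketch of
AA-CBR-style prediction without the learned relevance component: cases are
(feature-set, outcome) pairs, a case attacks a less specific case with the
opposite outcome (with no equally-outcomed case strictly in between), the new
case attacks cases whose features it does not contain, and the prediction is
read off the grounded extension. The subset-based specificity used here is only
one instantiation (the paper's decision-tree-learnt relevance would replace
it), and the names (attacks, grounded_extension, predict) and the tiny example
are assumptions.

```python
def attacks(a, b, casebase):
    """(features, outcome) case `a` attacks `b`: different outcomes, `a` strictly
    more specific than `b`, and no equally-outcomed case strictly in between."""
    (fa, oa), (fb, ob) = a, b
    if oa == ob or not (fb < fa):
        return False
    return not any(oc == oa and fb < fc < fa for fc, oc in casebase)

def grounded_extension(args, attack):
    """Least fixpoint: keep adding arguments all of whose attackers are already
    attacked by the current extension."""
    ext, changed = set(), True
    while changed:
        changed = False
        for a in args:
            if a in ext:
                continue
            attackers = [b for b in args if attack(b, a)]
            if all(any(attack(c, b) for c in ext) for b in attackers):
                ext.add(a)
                changed = True
    return ext

def predict(casebase, default_outcome, new_features):
    """Predict the outcome for `new_features` with (unlearned, subset-based) AA-CBR."""
    default = (frozenset(), default_outcome)
    newcase = ("NEW", None)  # sentinel argument for the new case
    args = list(casebase) + [default, newcase]

    def attack(x, y):
        if x == newcase:     # the new case attacks cases irrelevant to it
            return y != newcase and not (y[0] <= frozenset(new_features))
        if y == newcase:
            return False
        return attacks(x, y, list(casebase) + [default])

    ext = grounded_extension(args, attack)
    other = [o for _, o in casebase if o != default_outcome]
    return default_outcome if default in ext else (other[0] if other else default_outcome)

# Tiny example (hypothetical features/outcomes):
casebase = [(frozenset({"f1"}), "grant"), (frozenset({"f1", "f2"}), "deny")]
print(predict(casebase, "deny", {"f1"}))   # expect "grant"
```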
SpArX: Sparse Argumentative Explanations for Neural Networks
Neural networks (NNs) have various applications in AI, but explaining their
decision process remains challenging. Existing approaches often focus on
explaining how changing individual inputs affects NNs' outputs. However, an
explanation that is consistent with the input-output behaviour of an NN is not
necessarily faithful to the actual mechanics thereof. In this paper, we exploit
relationships between multi-layer perceptrons (MLPs) and quantitative
argumentation frameworks (QAFs) to create argumentative explanations for the
mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining
as much of the original mechanics as possible. It then translates the sparse
MLP into an equivalent QAF to shed light on the underlying decision process of
the MLP, producing global and/or local explanations. We demonstrate
experimentally that SpArX can give more faithful explanations than existing
approaches, while simultaneously providing deeper insights into the actual
reasoning process of MLPs.
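A minimal sketch of the two steps named above (sparsification, then translation
to a QAF), under assumptions the abstract does not fix: hidden neurons are
grouped by k-means on their activation profiles, merged by averaging incoming
weights and biases and summing outgoing weights, and the resulting weight
matrices are read as a QAF with positive weights as supports and negative
weights as attacks. The function names and the exact merging rule are
illustrative, not SpArX's definitive procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def sparsify_layer(W_in, b, W_out, activations, k):
    """Merge a hidden layer's neurons into k clusters (grouped by how similarly
    they activate on the data), averaging incoming weights and biases and
    summing outgoing weights so the merged layer approximates the original.

    Shapes: W_in (n_in, n_hidden), b (n_hidden,), W_out (n_hidden, n_out),
    activations (n_samples, n_hidden)."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit(activations.T).labels_
    W_in_s = np.stack([W_in[:, labels == c].mean(axis=1) for c in range(k)], axis=1)
    b_s = np.array([b[labels == c].mean() for c in range(k)])
    W_out_s = np.stack([W_out[labels == c].sum(axis=0) for c in range(k)], axis=0)
    return W_in_s, b_s, W_out_s

def layer_to_qaf_edges(W, names_from, names_to):
    """Read a weight matrix as QAF edges: positive weights as supports,
    negative weights as attacks, with |weight| as the edge strength."""
    edges = []
    for i, src in enumerate(names_from):
        for j, dst in enumerate(names_to):
            w = W[i, j]
            if w != 0:
                edges.append((src, dst, "support" if w > 0 else "attack", abs(w)))
    return edges
```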